15 research outputs found

    Service de configuration prédictif pour plateforme multicoeur reconfigurable hétérogène

    Get PDF
This article describes a reconfiguration management service based on predictive prefetching, intended for multicore architectures built from heterogeneous reconfigurable cores. The goal is to hide the reconfiguration latencies caused by the transfer of large bitstreams, and thereby allow more dynamic reconfiguration. We present the software implementation of the prefetching service and its functional validation. The architecture of the European Morpheus project is used as an example for this validation: we show how, on simplified application graphs, the reconfiguration overhead can be completely hidden.
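The hiding mechanism the abstract describes can be illustrated with a toy schedule: prefetching overlaps the next task's bitstream transfer with the execution of the current task, so only the portion of a transfer longer than the preceding task remains visible. This is a minimal sketch with made-up timings, not the Morpheus service itself.

```python
def runtime(tasks, prefetch):
    """Simulate total runtime of a linear task graph.

    tasks: list of (exec_time, reconfig_time) pairs. With prefetching,
    the next task's bitstream transfer overlaps the current execution,
    so only the overflow beyond the current exec time stays visible.
    """
    total = tasks[0][1]                # the first load is never hidden
    for i, (exec_t, _) in enumerate(tasks):
        total += exec_t
        if i + 1 < len(tasks):
            next_reconf = tasks[i + 1][1]
            if prefetch:
                total += max(0, next_reconf - exec_t)
            else:
                total += next_reconf
    return total

tasks = [(10, 4), (8, 6), (12, 5)]     # illustrative timings
print(runtime(tasks, prefetch=False))  # 45
print(runtime(tasks, prefetch=True))   # 34: only the first load shows
```

With these timings every transfer fits inside the preceding task, so prefetching removes all overhead except the initial load, mirroring the "completely hidden" result on simplified graphs.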

    Reconnaissance faciale basée sur les ondelettes robuste et optimisée pour les systèmes embarqués

    Get PDF
National audience. In the field of image processing, face recognition is a technique used in many applications: video surveillance, access to restricted areas, unlocking of electronic systems, and so on. In this context, this contribution proposes a fast face recognition method for real-time applications, based on the wavelet transform and robust to variations in position and lighting. The proposed method tolerates position variations of +/- 10% under varying lighting conditions. On an embedded platform such as the Raspberry Pi, the average recognition time is 26 ms per face, with a memory footprint 64 times smaller than the reference approach and equivalent recognition rates.

    A survey on real-time 3D scene reconstruction with SLAM methods in embedded systems

    Full text link
3D reconstruction with simultaneous localization and mapping (SLAM) is an important topic for transport systems such as drones, service robots and mobile AR/VR devices. Compared to a point cloud representation, 3D reconstruction based on meshes and voxels is particularly useful for high-level functions, like obstacle avoidance or interaction with the physical environment. This article reviews the implementation of a visual-based 3D scene reconstruction pipeline on resource-constrained hardware platforms. Real-time performance, memory management and low power consumption are critical for embedded systems. A conventional SLAM pipeline from sensors to 3D reconstruction is described, including the potential use of deep learning. The implementation of advanced functions with limited resources is detailed. Recent systems propose embedded implementations of 3D reconstruction methods at different granularities. The trade-off between required accuracy and resource consumption for real-time localization and reconstruction is one of the open research questions identified and discussed in this paper.

    REEFS (une architecture reconfigurable pour la stéréovision embarquée en contexte temps-réel)

    No full text
Stereovision allows the extraction of depth information from several images taken from different points of view. In computer vision, stereovision is used to evaluate the distance of objects directly and accurately. In Advanced Driver Assistance Systems (ADAS), a number of applications need accurate knowledge of the vehicle's surroundings and can thus benefit from the 3D information provided by stereovision. The tasks involved run in real time and require a high level of performance that can be provided by hardware accelerators. Moreover, as people's safety is affected, the reliability of results is critical: the hardware solution has to be flexible enough to adapt the processing to the situation at hand. Finally, in an embedded context, the silicon area of the chosen hardware solution must be limited. The purpose of this thesis is to design a processing architecture for stereovision that provides a performance level meeting ADAS requirements and a level of flexibility high enough to generate depth maps adapted to various applications. A heterogeneous reconfigurable architecture, named REEFS (Reconfigurable Embedded Engine for Flexible Stereovision), is designed and scaled to meet ADAS requirements and to provide the best trade-off between flexibility, performance and silicon area.
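The core computation such architectures accelerate is the disparity map: for each pixel in the left image, find the horizontal shift that best matches the right image. A standard baseline (and an assumption here, not the thesis's own algorithm) is sum-of-absolute-differences (SAD) block matching:

```python
def disparity(left, right, max_disp, half_win=1):
    """Naive SAD block matching on rectified grayscale images
    (lists of lists). Window size and disparity range are illustrative."""
    h, w = len(left), len(left[0])
    disp = [[0] * w for _ in range(h)]
    for y in range(half_win, h - half_win):
        for x in range(half_win + max_disp, w - half_win):
            best, best_d = float("inf"), 0
            for d in range(max_disp + 1):
                # sum of absolute differences over the matching window
                sad = sum(
                    abs(left[y + dy][x + dx] - right[y + dy][x + dx - d])
                    for dy in range(-half_win, half_win + 1)
                    for dx in range(-half_win, half_win + 1)
                )
                if sad < best:
                    best, best_d = sad, d
            disp[y][x] = best_d
    return disp
```

The per-pixel independence of this loop nest is what makes disparity computation a natural fit for the hardware parallelism the thesis targets; real ADAS pipelines use more robust cost functions and aggregation.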

    Etude quantitative d'algorithmes de stéréovision pour les systèmes embarqués d'aide à la conduite

    No full text
National audience. This article presents a study of different algorithmic solutions for building disparity maps in the context of embedded driver-assistance applications using stereovision. Since these applications have strong real-time and embedded constraints, our study relies not only on qualitative criteria (density and error rates) but also on quantitative criteria to estimate, for each algorithm, whether a hardware port is feasible.

    Embedded wavelet-based face recognition under variable position

    Get PDF
International audience. For several years, face recognition has been a hot topic in the image processing field: the technique is applied in several domains such as CCTV, the unlocking of electronic devices, and so on. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of robustness to subject position and of performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale Face Database B), that subject position in 3D space can vary by up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results remain satisfactory after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed, which is why compression techniques such as the wavelet transform are of interest. The approach also leads to a low-complexity face detection stage compatible with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer down to nanocomputers such as the Raspberry Pi and SECO boards. For K = 3 and a database of 40 faces, the mean execution time per frame is 0.64 ms on the x86-based computer, 9 ms on the SECO board and 26 ms on the Raspberry Pi (model B).
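The 2^(2K) figure follows directly from the decomposition: each 2-D wavelet level halves both image dimensions, so keeping only the approximation coefficients divides storage by 4 per level. A minimal sketch with a plain Haar-style approximation (2x2 block averaging, standing in for the paper's unspecified wavelet) makes the arithmetic concrete:

```python
def haar_approx(img):
    """One level of 2-D Haar approximation: average each 2x2 block,
    halving both dimensions."""
    h, w = len(img), len(img[0])
    return [
        [(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
          img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4
         for x in range(w // 2)]
        for y in range(h // 2)
    ]

img = [[float(x + y) for x in range(64)] for y in range(64)]
for _ in range(3):                     # K = 3 decomposition levels
    img = haar_approx(img)

reduction = (64 * 64) / (len(img) * len(img[0]))
print(reduction)                       # 64.0, i.e. 2**(2*3)
```

The 64x1 memory saving quoted in the abstract is thus exactly the geometric shrinkage of the approximation band over K = 3 levels.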

    Architecture flexible pour la stéréovision embarquée

    Get PDF
Stereovision allows the depth of a scene to be extracted from the images of two video sensors. In the automotive field, it is mainly used for detecting and localizing obstacles. The advantages provided…

    Use of wavelet for image processing in smart cameras with low hardware resources

    No full text
International audience. Images from embedded sensors need digital processing to recover high-quality images and to extract features of a scene. Depending on the properties of the sensor and on the application, the designer assembles different algorithms to process the images. In embedded devices, the hardware supporting these applications is heavily constrained in terms of power consumption and silicon area, so the algorithms have to comply with the embedded specifications, i.e. reduced computational complexity and low memory requirements. We investigate the opportunity to use the wavelet representation to perform good-quality image processing at a lower computational complexity than with the spatial representation. To reproduce such conditions, demosaicing, denoising, contrast correction and classification algorithms are executed on several well-known embedded cores (Leon3, Cortex-A9 and DSP C6x). Wavelet-based image reconstruction shows higher image quality and lower computational complexity (3x) than the usual spatial reconstruction. The use of wavelet decomposition also increases the face recognition rate while decreasing computational complexity by a factor of 25.
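One reason wavelets suit low-resource smart cameras is that the transform can be computed with the lifting scheme: integer adds and shifts, in a single pass, with no auxiliary buffer. The following 1-D Haar lifting sketch is a generic illustration of that property, not the paper's implementation:

```python
def haar_lifting_forward(s):
    """Integer Haar lifting: even/odd pairs -> [approx..., detail...].
    Uses only adds and shifts, so it is exactly invertible."""
    approx, detail = [], []
    for i in range(len(s) // 2):
        a, b = s[2 * i], s[2 * i + 1]
        d = b - a                      # predict step: detail coefficient
        a = a + (d >> 1)               # update step: integer average
        approx.append(a)
        detail.append(d)
    return approx + detail

def haar_lifting_inverse(t):
    """Exact inverse of haar_lifting_forward."""
    n = len(t) // 2
    out = []
    for a, d in zip(t[:n], t[n:]):
        x = a - (d >> 1)               # undo update
        out += [x, x + d]              # undo predict
    return out

x = [12, 14, 7, 9, 30, 28, 5, 5]
print(haar_lifting_forward(x))         # [13, 8, 29, 5, 2, 2, -2, 0]
assert haar_lifting_inverse(haar_lifting_forward(x)) == x
```

Because both steps use the same floor shift, the round trip is lossless even with integer arithmetic, which is what keeps the memory and compute budget low on cores like the Leon3.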

    Image quantization towards data reduction: robustness analysis for SLAM methods on embedded platforms

    No full text
Poster. International audience. Embedded simultaneous localization and mapping (SLAM) aims to provide real-time performance for advanced perception functions under restrictive hardware resources. Localization methods based on visible-light cameras include image processing functions that require frame memory management. This work reduces the dynamic range of the input frames and evaluates the accuracy and robustness of real-time SLAM algorithms on the quantized frames. We show that the input data can be reduced by up to 62% and 75% while maintaining a trajectory error similar to that of full-precision input images and lower than 0.15 m.
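The 62% and 75% figures are consistent with reducing 8-bit pixels to roughly 3 and 2 bits per pixel ((8-b)/8 of the data removed); the exact quantization used in the poster is not specified, so the mapping below is an assumption. A minimal bit-depth quantization sketch:

```python
def quantize(pixels, bits):
    """Reduce 8-bit pixel values to the given bit depth by dropping
    low-order bits (assumed uniform quantization, for illustration)."""
    shift = 8 - bits
    return [p >> shift for p in pixels]

def reduction(bits):
    """Percentage of input data removed relative to 8-bit pixels."""
    return 100 * (8 - bits) / 8

print(reduction(3))                    # 62.5
print(reduction(2))                    # 75.0
print(quantize([0, 37, 128, 255], 2))  # [0, 0, 2, 3]
```

The open question the poster addresses is then how far this dynamic-range cut can go before SLAM trajectory error degrades, which they bound at 0.15 m.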